• Hadoop Base Station Analyst
• Data Processing Platform
• Data Analysis & Preprocessing
• First Mapper Execution Program
• First Reducer Execution Program
• Second Mapper Execution Program
• Second Reducer Execution Program
• Processing the Full Dataset on the Hadoop Cluster
• Reference

Hadoop Base Station Analyst

By analyzing a base-station dataset, compute the call-drop rate of each base station, sort the base stations' Cell Global Identities (CGI) by drop rate in descending order, and find the five base stations with the highest drop rates.

Data Processing Platform

The analysis runs on Hadoop 3.4.1, deployed in pseudo-distributed mode inside a Docker container.

Data Analysis & Preprocessing

Open the Comma-Separated Values (CSV) dataset; a small excerpt looks like this:
record_time,imei,cell,ph_num,call_num,drop_num,duration,drop_rate,net_type,erl
2011-07-13 00:00:00+08,356966,29448-37062,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,352024,29448-51331,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,353736,29448-51331,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,353736,29448-51333,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,351545,29448-51333,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,353736,29448-51343,1,0,0,8,0,G,0
2011-07-13 00:00:00+08,359681,29448-51462,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,354707,29448-51462,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,356137,29448-51470,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,352739,29448-51971,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,354154,29448-51971,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,127580,29448-51971,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,354264,29448-51973,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,354733,29448-51973,1,0,0,36,0,G,0
2011-07-13 00:00:00+08,356807,29448-51973,0,0,0,0,0,G,0
2011-07-13 00:00:00+08,125470,29448-51973,1,0,0,13,0,G,0
2011-07-13 00:00:00+08,353530,29448-52061,1,0,0,46,0,G,0
2011-07-13 00:00:00+08,352417,29448-5231,1,0,0,2,0,G,0
From the header we can see that the columns of interest are cell (the third column, holding the CGI) and drop_rate (the eighth column). The CGI is a string encoded as number-number, and drop_rate is a numeric string in the range 0–100. Apart from empty records, no additional preprocessing is applied to the data. The ranking metric is the average drop rate, i.e. the sum of a station's drop-rate values divided by the number of records for that station. The planned processing pipeline is:

• First Map: extract the CGI and drop_rate from each raw record.
• First Shuffle (performed automatically by the MapReduce framework): group the drop-rate values belonging to the same base station.
• First Reduce: compute the average drop rate of each base station.
• Second Map: swap the output of the first Reduce into records of the form drop_rate <TAB> CGI and write them to standard output.
• Second Shuffle (performed automatically by the MapReduce framework): sort by drop rate (only the sorting behavior of the shuffle is used here).
• Second Reduce: swap the output of the second Shuffle back into records of the form CGI <TAB> drop_rate (ascending by drop rate) and write them to standard output.

Two MapReduce passes are needed because Hadoop Streaming exposes no hook into the sort phase; it only sorts keys lexicographically in ascending order, so the second pass simply exploits this property of the shuffle to sort the data. The sorting is still performed in a distributed fashion; otherwise it could just be done directly in the program. All jobs are run with the Hadoop Streaming tool, which makes the processing framework independent of the programming language. The Mapper Execution (hereafter Mapper) and Reducer Execution (hereafter Reducer) programs below are implemented in Python; a minimal local sketch of the whole flow follows.
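To make the two-pass data flow concrete before looking at the real scripts, here is a small, self-contained Python sketch that simulates both passes on a few made-up in-memory rows. The dict and sorted() calls stand in for the grouping and sorting that Hadoop's shuffle would normally perform; the row values are purely illustrative.

#!/usr/bin/env python3
# Minimal local simulation of the two-pass flow (illustrative rows, not real data).
from collections import defaultdict

rows = [  # (cgi, drop_rate) pairs, as the first Map would emit them
    ("29448-51343", 0.0), ("29448-51343", 2.0),
    ("29464-10043", 100.0), ("29528-16231", 50.0), ("29528-16231", 50.0),
]

# First Reduce: average drop rate per base station (shuffle grouping emulated by a dict).
grouped = defaultdict(list)
for cgi, rate in rows:
    grouped[cgi].append(rate)
averages = {cgi: sum(r) / len(r) for cgi, r in grouped.items()}

# Second Map + Shuffle: swap to (zero-padded drop_rate, cgi) and sort the keys as strings,
# which is what Hadoop Streaming's shuffle does.
keyed = [(f"{avg:09.5f}", cgi) for cgi, avg in averages.items()]
for rate, cgi in sorted(keyed):  # Second Reduce: swap back and print
    print(f"{cgi}\t{rate}")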

First Mapper Execution Program

#!/usr/bin/env python3
import sys

try:
    for line in sys.stdin:
        line = line.strip().split(',')
        # extract the CGI (Cell Global Identity), i.e. the cell column
        cgi = line[2]
        # extract the drop rate (third-from-last column)
        drop = line[-3]
        print(f'{cgi}\t{drop}')
except IndexError:
    # let malformed lines fail the task loudly
    raise

First Reducer Execution Program

#!/usr/bin/env python3
import sys
from collections import defaultdict

# collect every drop-rate value reported for each CGI
drop_rate_list = defaultdict(list)
for line in sys.stdin:
    k, v = line.strip().split('\t', 1)
    drop_rate_list[k].append(float(v))

# emit the average drop rate per base station
for cgi, drop_rate in drop_rate_list.items():
    print(f'{cgi}\t{sum(drop_rate) / len(drop_rate):.5f}')
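Note that this reducer buffers every key in memory, which is fine for this dataset but does not scale without bound. Because the shuffle delivers keys to the reducer already sorted, so that all records for one CGI arrive consecutively, a streaming-style alternative could emit each station's average as soon as its key changes. The sketch below is only a hedged illustration of that idea, not the version used in this post.

#!/usr/bin/env python3
# Streaming-style reducer sketch: relies on the shuffle delivering keys in sorted order,
# so all records for one CGI arrive consecutively. Constant memory per key.
import sys

current_cgi, total, count = None, 0.0, 0

def emit(cgi, total, count):
    if cgi is not None and count:
        print(f'{cgi}\t{total / count:.5f}')

for line in sys.stdin:
    cgi, rate = line.strip().split('\t', 1)
    if cgi != current_cgi:
        emit(current_cgi, total, count)   # flush the previous station
        current_cgi, total, count = cgi, 0.0, 0
    total += float(rate)
    count += 1
emit(current_cgi, total, count)           # flush the last station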
Test the first MapReduce pass locally:
cat test.csv | ./mapexe_1.py | sort | ./reducexe_1.py
A portion of the output is shown below; the average drop rate of each base station has been computed:
29560-42040  0.19802
29560-42041  0.15174
29560-42042  0.25510
29560-42081  0.11919
29560-42082  0.21978
29560-42083  0.13106
29560-42090  0.00000
29560-42101  0.24096
29560-42102  0.27174
29560-42110  0.27472
29560-42111  0.27624
29560-42112  0.24938
29560-42113  0.27624
29560-42121  0.09242
29560-42122  0.33333
29560-42123  0.29155
29560-42131  0.20964
29560-42132  0.07831
29560-42133  0.09390
29560-42190  0.13369

Second Mapper Execution Program

#!/usr/bin/env python3
import sys

try:
    for line in sys.stdin:
        cgi, drop_rate = line.strip().split('\t')
        # the shuffle sorts keys lexicographically, so zero-pad the integer part to three digits
        drop_rate = f'{drop_rate.split(".")[0]:0>3}' + '.' + drop_rate.split('.')[1]
        print(f'{drop_rate}\t{cgi}')
except IndexError:
    raise
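The padding matters because the shuffle compares keys as strings. The following standalone snippet (not part of the job, with made-up values) shows how unpadded rates sort incorrectly and padded ones sort correctly:

#!/usr/bin/env python3
# Lexicographic vs. numeric ordering of drop-rate keys (illustrative values).
rates = ['100.00000', '2.32558', '50.00000', '0.19802']

# String sort without padding: '100.…' lands before '2.…'.
print(sorted(rates))            # ['0.19802', '100.00000', '2.32558', '50.00000']

# Zero-padding the integer part to three digits makes string order match numeric order.
padded = [f'{float(r):09.5f}' for r in rates]
print(sorted(padded))           # ['000.19802', '002.32558', '050.00000', '100.00000']

Hadoop also ships a key-field-based comparator that can be configured to sort keys numerically, which would avoid the padding, but the padding keeps these scripts self-contained.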

Second Reducer Execution Program

#!/usr/bin/env python3
import sys

for line in sys.stdin:
    drop_rate, cgi = line.strip().split('\t')
    print(f'{cgi}\t{drop_rate}')
Test the complete processing pipeline locally:
cat test.csv | ./mapexe_1.py | sort | ./reducexe_1.py | ./mapexe_2.py | sort | ./reducexe_2.py
A portion of the output is shown below; the average drop rate of each base station has been computed and sorted in ascending order:
29560-42040  000.19802
29560-41870  000.20080
29560-42131  000.20964
29560-41571  000.21053
29560-4131   000.21834
29560-42082  000.21978
29560-41611  000.22075
29560-4133   000.22472
29560-41712  000.23095
29560-41872  000.23923
29560-42101  000.24096
29560-42112  000.24938
29560-42042  000.25510
29560-42102  000.27174
29560-42110  000.27472
29560-42111  000.27624
29560-42113  000.27624
29560-42123  000.29155
29560-41842  000.31949
29560-42122  000.33333
29560-41993  000.34130
29560-40953  000.35461
29560-40950  000.35842
29560-40952  000.50000
29560-42023  000.55866
29560-4132   000.64103
29560-42022  000.64103
29560-4013   000.65789
29560-4003   001.02041
29560-4002   001.08696
29464-18282  011.11111
29464-19381  033.33333
29464-18744  050.00000
29528-16231  050.00000
29464-10043  100.00000
29464-18323  100.00000

Processing the Full Dataset on the Hadoop Cluster

The project directory structure is as follows:
.
├── init.sh
└── station
    ├── mapexe_1.py
    ├── mapexe_2.py
    ├── reducexe_1.py
    ├── reducexe_2.py
    ├── station.csv
    └── test.csv
📃NOTE Remember to delete the header row of the CSV file before uploading; otherwise it breaks parsing, and the first MapReduce job fails with ValueError: could not convert string to float: 'drop_rate' when the header value reaches float() in the first Reducer.
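One possible way to strip the header before uploading (a standalone helper sketch, not part of the project scripts; the output file name is made up here, and a shell one-liner such as tail -n +2 station.csv would do the same job):

#!/usr/bin/env python3
# strip_header.py (hypothetical helper): write a copy of station.csv without its header row.
import shutil

src, dst = 'station.csv', 'station_noheader.csv'
with open(src, 'r', encoding='utf-8') as fin, open(dst, 'w', encoding='utf-8') as fout:
    next(fin)                     # skip the header row
    shutil.copyfileobj(fin, fout) # copy the remaining data rows unchanged
print(f'wrote {dst} without the header row')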
The initialization script init.sh is shown below. It uploads the data file and generates the Hadoop job-launch scripts in the target directory:
#!/bin/bash
# See: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/FileSystemShell.html#put
project_name=station

for (( i = 1; i <= 2; i++ ))
do
    if [ "${i}" -eq 1 ]; then
        job_file=station.csv
    elif [ "${i}" -eq 2 ]; then
        job_file=reduce_output
    fi

    # generate the job-launch script for this pass
    cat <<-SCRIPT > ${project_name}/${project_name}_${i}.sh
#!/bin/bash -x
# create folder
hdfs dfs -mkdir -p input_${project_name}_${i}
hdfs dfs -put -f ${job_file} input_${project_name}_${i}
# start job
mapred streaming \
    -input input_${project_name}_${i}/${job_file} \
    -output output_${project_name}_${i} \
    -mapper mapexe_${i}.py \
    -reducer reducexe_${i}.py \
    -file mapexe_${i}.py \
    -file reducexe_${i}.py
SCRIPT
done

# transmit files into the container and fix ownership/permissions
sudo docker cp ${project_name} hadoop_single_node:/home/singlenode/
sudo docker exec hadoop_single_node sudo chown -R singlenode:singlenode ${project_name}
sudo docker exec hadoop_single_node sudo chmod -R +x ${project_name}/*.py ${project_name}/*.sh
The initialization script uploads the job data file to the cluster and generates two MapReduce job-creation scripts. Running one of the MapReduce scripts produces output like the following excerpt:
singlenode@singlenode:~/station$ ls
mapexe_1.py  mapexe_2.py  reducexe_1.py  reducexe_2.py  'station bk.csv'  station.csv  station_1.sh  station_2.sh  test.csv
singlenode@singlenode:~/station$ ./station_1.sh
+ hdfs dfs -mkdir -p input_station_1
+ hdfs dfs -put -f station.csv input_station_1
+ mapred streaming -input input_station_1/station.csv -output output_station_1 -mapper mapexe_1.py -reducer reducexe_1.py -file mapexe_1.py -file reducexe_1.py
2024-12-10 08:43:19,058 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [mapexe_1.py, reducexe_1.py] [] /tmp/streamjob10481481972442005223.jar tmpDir=null
2024-12-10 08:43:20,094 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2024-12-10 08:43:20,238 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2024-12-10 08:43:20,239 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2024-12-10 08:43:20,265 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2024-12-10 08:43:20,544 INFO mapred.FileInputFormat: Total input files to process : 1
2024-12-10 08:43:20,616 INFO mapreduce.JobSubmitter: number of splits:1
2024-12-10 08:43:20,836 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1722600033_0001
2024-12-10 08:43:20,836 INFO mapreduce.JobSubmitter: Executing with tokens: []
2024-12-10 08:43:21,106 INFO mapred.LocalDistributedCacheManager: Localized file:/home/singlenode/station/mapexe_1.py as file:/tmp/hadoop-singlenode/mapred/local/job_local1722600033_0001_753b8a88-a4f1-4361-a10e-eb80e85f98ab/mapexe_1.py
2024-12-10 08:43:21,136 INFO mapred.LocalDistributedCacheManager: Localized file:/home/singlenode/station/reducexe_1.py as file:/tmp/hadoop-singlenode/mapred/local/job_local1722600033_0001_de1e189c-b29a-4d19-b087-b48fb56f454c/reducexe_1.py
2024-12-10 08:43:21,222 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2024-12-10 08:43:21,225 INFO mapreduce.Job: Running job: job_local1722600033_0001
2024-12-10 08:43:21,225 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2024-12-10 08:43:21,237 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
The contents of the output directory in HDFS are as follows:
singlenode@singlenode:~/station$ hdfs dfs -ls /user/singlenode/output_station_1
Found 2 items
-rw-r--r--   3 singlenode supergroup          0 2024-12-10 08:43 /user/singlenode/output_station_1/_SUCCESS
-rw-r--r--   3 singlenode supergroup      38979 2024-12-10 08:43 /user/singlenode/output_station_1/part-00000
singlenode@singlenode:~/station$ hdfs dfs -ls /user/singlenode/output_station_1/part-00000
-rw-r--r--   3 singlenode supergroup      38979 2024-12-10 08:43 /user/singlenode/output_station_1/part-00000
singlenode@singlenode:~/station$ hdfs dfs -head /user/singlenode/output_station_1/part-00000
29448-37062  0.00000
29448-51331  0.00000
29448-51333  0.00000
29448-51343  0.00000
29448-51462  0.00000
29448-51470  0.00000
29448-51971  0.00000
29448-51973  0.00000
29448-52061  0.00000
29448-5231   0.00000
29448-5233   0.00000
29448-52541  0.00000
29448-53050  0.00000
29448-53523  0.00000
29448-53871  0.00000
29448-54853  0.00000
29448-54874  0.00000
29448-55671  0.00000
29448-55672  0.00000
29448-55813  0.00000
29448-55823  0.00000
29448-5613   0.00000
As shown above, the first MapReduce pass has produced a mapping file of the form CGI <TAB> drop_rate, and most base stations turn out to be very stable, with a drop rate of 0. Next, modify the station_2.sh script so that the argument of the mapred streaming -input option points to the output file of the first MapReduce pass. The two lines hdfs dfs -mkdir -p input_station_2 and hdfs dfs -put -f reduce_output input_station_2 may also be deleted, but leaving them in does not affect the run; they exist only because the initialization script generates both job scripts with the same template, which is one area where it could be improved. The modified script looks like this:
#!/bin/bash -x
# create folder
hdfs dfs -mkdir -p input_station_2
hdfs dfs -put -f reduce_output input_station_2
# start job
mapred streaming \
    -input output_station_1/part-00000 \
    -output output_station_2 \
    -mapper mapexe_2.py \
    -reducer reducexe_2.py \
    -file mapexe_2.py \
    -file reducexe_2.py
After the modification, run the second MapReduce job; a portion of its output is shown below:
singlenode@singlenode:~/station$ ls
mapexe_1.py  mapexe_2.py  reducexe_1.py  reducexe_2.py  'station bk.csv'  station.csv  station_1.sh  station_2.sh  test.csv
singlenode@singlenode:~/station$ ./station_2.sh
+ mapred streaming -input output_station_1/part-00000 -output output_station_2 -mapper mapexe_2.py -reducer reducexe_2.py -file mapexe_2.py -file reducexe_2.py
2024-12-10 09:19:28,686 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
packageJobJar: [mapexe_2.py, reducexe_2.py] [] /tmp/streamjob8019497484163869617.jar tmpDir=null
2024-12-10 09:19:29,641 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2024-12-10 09:19:29,812 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2024-12-10 09:19:29,812 INFO impl.MetricsSystemImpl: JobTracker metrics system started
2024-12-10 09:19:29,834 WARN impl.MetricsSystemImpl: JobTracker metrics system already initialized!
2024-12-10 09:19:30,153 INFO mapred.FileInputFormat: Total input files to process : 1
2024-12-10 09:19:30,225 INFO mapreduce.JobSubmitter: number of splits:1
2024-12-10 09:19:30,463 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1527443538_0001
2024-12-10 09:19:30,463 INFO mapreduce.JobSubmitter: Executing with tokens: []
2024-12-10 09:19:30,739 INFO mapred.LocalDistributedCacheManager: Localized file:/home/singlenode/station/mapexe_2.py as file:/tmp/hadoop-singlenode/mapred/local/job_local1527443538_0001_32e0e2d6-27cd-4640-ba72-f5d86afc6228/mapexe_2.py
2024-12-10 09:19:30,774 INFO mapred.LocalDistributedCacheManager: Localized file:/home/singlenode/station/reducexe_2.py as file:/tmp/hadoop-singlenode/mapred/local/job_local1527443538_0001_3f9b696a-0290-4bd8-b406-b3608c16106c/reducexe_2.py
2024-12-10 09:19:30,859 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
2024-12-10 09:19:30,862 INFO mapreduce.Job: Running job: job_local1527443538_0001
2024-12-10 09:19:30,863 INFO mapred.LocalJobRunner: OutputCommitter set in config null
2024-12-10 09:19:30,868 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
The contents of the output directory in HDFS are as follows:
singlenode@singlenode:~/station$ hdfs dfs -ls /user/singlenode/output_station_2
Found 2 items
-rw-r--r--   3 singlenode supergroup          0 2024-12-10 09:19 /user/singlenode/output_station_2/_SUCCESS
-rw-r--r--   3 singlenode supergroup      42914 2024-12-10 09:19 /user/singlenode/output_station_2/part-00000
singlenode@singlenode:~/station$ hdfs dfs -tail /user/singlenode/output_station_2/part-00000
0133  002.27273
29560-4723   002.32558
29608-45022  002.32558
58165-20053  002.32558
58166-40121  002.32558
58165-20033  002.32558
58167-50073  002.38095
58167-50042  002.38095
58165-20113  002.43902
58155-30073  002.43902
58166-40083  002.50000
58155-30092  002.50000
58165-20152  002.63158
58155-30023  002.63158
29576-5581   002.70270
29576-5572   002.70270
58166-40113  002.85714
29560-46203  002.94118
58167-50102  002.94118
58167-50051  002.94118
58166-40022  003.03030
58165-20193  003.03030
58166-40041  003.03030
58155-30122  003.12500
58155-30141  003.22581
58167-50132  003.22581
58165-20042  003.33333
58166-40053  003.44828
58166-40091  003.84615
58167-50111  003.84615
58155-30053  003.84615
58155-30123  003.84615
58165-20082  003.84615
58165-20121  003.84615
58166-40093  004.00000
58166-40061  004.16667
58165-20191  004.16667
29560-45013  004.34783
58167-58011  005.55556
58165-28051  005.55556
58155-38021  011.11111
29464-18282  011.11111
29464-19381  033.33333
29464-18744  050.00000
29528-16231  050.00000
29464-18323  100.00000
29464-10043  100.00000
The second MapReduce pass has sorted the output of the first pass by base-station drop rate. From the tail of the output, the five base stations with the highest drop rates are:

• CGI: 29464-10043, drop rate: 100.00000
• CGI: 29464-18323, drop rate: 100.00000
• CGI: 29528-16231, drop rate: 050.00000
• CGI: 29464-18744, drop rate: 050.00000
• CGI: 29464-19381, drop rate: 033.33333

The same list can also be extracted programmatically, as sketched below.
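A small hedged helper for pulling the top five out of the final output; it assumes the part file has first been copied locally (for example with hdfs dfs -get) to the path used here:

#!/usr/bin/env python3
# Read the final CGI <TAB> padded-drop-rate output and print the five worst stations.
path = 'output_station_2/part-00000'   # assumed local copy of the HDFS output

with open(path, encoding='utf-8') as f:
    records = [line.strip().split('\t') for line in f if line.strip()]

# Sort numerically in descending order (the file itself is ascending) and take the top five.
for cgi, rate in sorted(records, key=lambda r: float(r[1]), reverse=True)[:5]:
    print(f'CGI: {cgi}, drop rate: {rate}')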

Reference

• Apache Hadoop 3.4.1 – Hadoop: Setting up a Single Node Cluster
• Apache Hadoop 3.4.1 – HDFS Commands Guide
• Apache Hadoop MapReduce Streaming – Hadoop Streaming
Create: Thu Dec 12 21:49:50 2024 Last Modified: Thu Dec 12 21:49:50 2024